Ultrasound signal


Masked Autoencoders for Ultrasound Signals: Robust Representation Learning for Downstream Applications

Roßteutscher, Immanuel, Drese, Klaus S., Uphues, Thorsten

arXiv.org Artificial Intelligence

We investigated the adaptation and performance of Masked Autoencoders (MAEs) with Vision Transformer (ViT) architectures for self-supervised representation learning on one-dimensional (1D) ultrasound signals. Although MAEs have demonstrated significant success in computer vision and other domains, their use for 1D signal analysis, especially for raw ultrasound data, remains largely unexplored. Ultrasound signals are vital in industrial applications such as non-destructive testing (NDT) and structural health monitoring (SHM), where labeled data are often scarce and signal processing is highly task-specific. We propose an approach that leverages MAEs to pre-train on unlabeled synthetic ultrasound signals, enabling the model to learn robust representations that enhance performance in downstream tasks such as time-of-flight (ToF) classification. This study systematically investigates the impact of model size, patch size, and masking ratio on pre-training efficiency and downstream accuracy. Our results show that pre-trained models significantly outperform both models trained from scratch and strong convolutional neural network (CNN) baselines optimized for the downstream task. Additionally, pre-training on synthetic data demonstrates superior transferability to real-world measured signals compared with training solely on limited real datasets. This study underscores the potential of MAEs for advancing ultrasound signal analysis through scalable, self-supervised learning.
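The core MAE pre-training step described above is to split the 1D signal into fixed-size patches and hide a large fraction of them, leaving only the visible patches as encoder input. A minimal sketch of that patchify-and-mask step follows; the patch size and masking ratio here are illustrative defaults, not the paper's actual hyperparameters.

```python
import numpy as np

def patchify_and_mask(signal, patch_size=16, mask_ratio=0.75, rng=None):
    """Split a 1D signal into non-overlapping patches and mask a fraction.

    Returns the visible patches (the encoder input), the indices of the
    masked patches (reconstruction targets), and the full patch array.
    """
    rng = rng or np.random.default_rng(0)
    n_patches = len(signal) // patch_size
    patches = signal[: n_patches * patch_size].reshape(n_patches, patch_size)

    n_masked = int(round(mask_ratio * n_patches))
    perm = rng.permutation(n_patches)          # random patch shuffle
    masked_idx = np.sort(perm[:n_masked])      # hidden from the encoder
    visible_idx = np.sort(perm[n_masked:])     # fed to the encoder
    return patches[visible_idx], masked_idx, patches

# Example: a 1024-sample signal, 16-sample patches, 75% masking
sig = np.sin(np.linspace(0, 40 * np.pi, 1024))
visible, masked_idx, patches = patchify_and_mask(sig)
# 64 patches total: 16 remain visible, 48 are masked
```

During pre-training, the decoder would reconstruct `patches[masked_idx]` from the visible patches, with reconstruction error (e.g. mean squared error on the masked patches) as the loss.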


Deep Learning based acoustic measurement approach for robotic applications on orthopedics

Lan, Bangyu, Abayazid, Momen, Verdonschot, Nico, Stramigioli, Stefano, Niu, Kenan

arXiv.org Artificial Intelligence

In Total Knee Replacement Arthroplasty (TKA), surgical robots can provide image-guided navigation to fit implants with high precision. Their tracking approach relies heavily on bone pins inserted into the bones and tracked by an optical tracking system. This is normally done invasively and with radiation exposure (implantable markers and CT scans), which introduces unnecessary trauma and prolongs patient preparation time. To tackle this issue, ultrasound-based bone tracking could offer an alternative. In this study, we proposed a novel deep learning architecture to improve the accuracy of bone tracking with A-mode ultrasound (US). We first acquired an ultrasound dataset from a cadaver experiment, in which the ground-truth bone locations were calculated using bone pins. Because the ground-truth bone locations and the US signals were recorded simultaneously, we could label the bone peaks in the raw US signals. These data were used to train the proposed CasAtt-UNet to predict bone locations automatically and robustly. As a result, our method achieved sub-millimeter precision across all eight bone areas, with the sole exception of one channel in the ankle. This method enables robust measurement of lower-extremity bone positions from raw 1D ultrasound signals and shows great potential for applying A-mode ultrasound in orthopedic surgery in a safe, convenient, and efficient manner.
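A-mode bone tracking ultimately reduces to locating an echo peak in a 1D trace and converting its round-trip time of flight into a depth via depth = c·t/2. A minimal sketch of that conversion (the sampling rate, speed of sound, and transmit-pulse skip are illustrative assumptions, not values from the paper, which instead learns the peak location with CasAtt-UNet):

```python
import numpy as np

def echo_depth_mm(trace, fs_hz=50e6, c_mps=1540.0, skip=100):
    """Estimate reflector depth from a 1D A-mode trace.

    Picks the strongest echo after `skip` samples (to ignore the
    transmit pulse), then converts its round-trip time of flight into
    a one-way depth: depth = c * t / 2.
    """
    peak = skip + int(np.argmax(np.abs(trace[skip:])))
    tof_s = peak / fs_hz                # round-trip travel time (s)
    return 1e3 * c_mps * tof_s / 2.0    # one-way depth in millimetres

# Synthetic trace: a single echo 2000 samples after transmit
trace = np.zeros(4096)
trace[2000] = 1.0
depth = echo_depth_mm(trace)
# At 50 MHz sampling and c = 1540 m/s, sample 2000 maps to 30.8 mm
```

A simple argmax works only for a clean, single-echo trace; on real signals with clutter, multiple tissue interfaces, and noise, a learned detector such as the paper's CasAtt-UNet replaces this peak-picking step.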


SpiceNews

#artificialintelligence

SoFi, directed with the help of a Super Nintendo controller paired with acoustic signals, has been engineered to help researchers explore marine life more freely and in greater depth, and to bring us closer to the expansive ecosystem that blooms beyond what our naked eyes can perceive. SoFi is essentially a soft robotic fish that consists of a controller, a Raspberry Pi, and a HiFi Berry, sealed inside a cast-moulded, waterproof silicone membrane. The membrane is also filled with a non-conductive mineral oil, which allows for pressure equalization underwater. The Raspberry Pi receives input from the controller, after which the ultrasound signals are amplified for SoFi through the HiFi Berry. These amplified ultrasound signals, interpreted by a modem embedded in SoFi's head, control everything from tail movement, pitch, and depth to the on-board camera.


'Luke Skywalker' AI hand lets amputee play the piano

Daily Mail - Science & tech

A remarkable new type of prosthetic inspired by Luke Skywalker's bionic hand has allowed an amputee musician to play the piano once again. Jason Barnes, who lost part of his right arm in a work accident five years ago, has been fitted with a prosthetic arm designed by Georgia Tech researchers to give the wearer individual control of each finger. Distinguishing it from other prosthetics on the market, the new device uses ultrasound signals to detect which movements the wearer wants to carry out. Most high-tech prosthetics, including Barnes' everyday prosthesis, are controlled by electromyogram (EMG) sensors, the researchers explain. While these allow for certain movements, they have some limitations.